PMXBOT Log file Viewer


#mongodb logs for Wednesday the 12th of February, 2014

[00:03:44] <mwolffhh> hi, someone got an idea how to query this? got a collection where *some* documents have a field "last_used", which is a datetime field. I want to select all documents where the field exists *and* has a date value earlier than specified.
[00:04:32] <mwolffhh> tried find({"last_used":{"$exists":true}, "last_used":{"$lte":{"sec":1392144183,"usec":0}}}), to no avail
[00:05:44] <mwolffhh> (absolute MongoDB beginner btw. :-))
[00:11:51] <erik508> mwolffhh: you can just do something like
[00:11:55] <erik508> mongos> s = new Date(2014, 2, 12)
[00:11:55] <erik508> ISODate("2014-03-12T07:00:00Z")
[00:12:02] <erik508> mongos> db.foo.find({"last_used":{"$lte":s}})
[00:12:02] <erik508> { "_id" : ObjectId("52faba6aad6e96598f38cee0"), "x" : 1, "last_used" : ISODate("2014-02-12T00:03:54.363Z") }
[00:12:05] <erik508> { "_id" : ObjectId("52faba6dad6e96598f38cee1"), "x" : 1, "last_used" : ISODate("2014-02-12T00:03:57.953Z") }
[00:12:08] <erik508> { "_id" : ObjectId("52faba70ad6e96598f38cee2"), "x" : 1, "last_used" : ISODate("2014-02-12T00:04:00.203Z") }
[00:14:13] <erik508> that will only find documents where "last_used" exists, and matches the time comparison
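The query from 00:04 fails for a reason worth spelling out: a JavaScript object literal can't hold the same key twice, so the second "last_used" clause silently replaces the first before the query is ever sent. A sketch (the dates are examples):

```javascript
// Duplicate keys: only the last "last_used" survives, so the
// $exists clause is silently dropped.
var broken = { "last_used": { "$exists": true },
               "last_used": { "$lte": new Date(2014, 1, 12) } };
// Object.keys(broken).length is 1, and broken.last_used.$exists is gone.

// Combine both operators under a single key instead:
var query = { "last_used": { "$exists": true,
                             "$lte": new Date(2014, 1, 12) } };
// db.coll.find(query)
```

(As erik508 notes, a plain {$lte: date} comparison only matches documents where the field exists, so the $exists clause is optional here anyway.)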
[00:20:28] <txt23> How can I allow only a few IPs to connect to MongoDB instead of keeping it open to the web?
[00:21:02] <mwolffhh> erik508: yes, it works when I use a date object in my query - but what about that { "sec": 4711, "usec": 0 } variant? I use Doctrine ODM to generate queries and it automatically converts PHP DateTime objects to precisely that syntax
[00:25:20] <mwolffhh> txt23: according to this, you should probably use your OS's firewall and/or run MongoDB within a VPN: http://docs.mongodb.org/manual/core/security-network/
[00:25:49] <mwolffhh> txt23: but I am new to MongoDB, so there may be a better way
[00:26:24] <mwolffhh> txt23: but even if there is, it probably can't hurt to set this up as well as another line of defense
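A sketch of that extra line of defense (the interface and source IPs below are placeholders; bind_ip and the iptables rules would need adapting to the actual hosts):

```conf
# /etc/mongod.conf: listen only on loopback and the private interface
bind_ip = 127.0.0.1,10.0.0.5

# iptables: allow the local app server and one remote IP, drop the rest
#   iptables -A INPUT -p tcp -s 10.0.0.6    --dport 27017 -j ACCEPT
#   iptables -A INPUT -p tcp -s 203.0.113.7 --dport 27017 -j ACCEPT
#   iptables -A INPUT -p tcp --dport 27017 -j DROP
```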
[00:26:26] <erik508> mwolffhh: hmm, not too familiar with Doctrine
[00:27:20] <mwolffhh> erik508: if I googled correctly, this seems to be similar to what the native MongoDate object does
[00:31:27] <txt23> mwolffhh: Yeah makes sense. I am going to have a replication server and for that I was thinking of adding only a few connections to allow list
[00:32:27] <txt23> You can run both mysql and mongo on the same server right?
[00:33:22] <mwolffhh> txt23: sure
[00:37:39] <txt23> I just installed mongo using this "yum install mongo-10gen mongo-10gen-server" but when I try to go into "mongo" here is what I get "Tue Feb 11 19:34:34.002 Error: couldn't connect to server 127.0.0.1:27017 at src/mongo/shell/mongo.js:145"
[00:37:55] <txt23> Even "netstat -anp | grep 27017" doesn't show anything
[00:38:11] <txt23> And there is nothing in the log file
[00:38:47] <erik508> mwolffhh: those look like Timestamp types vs. ISODate
[00:39:05] <erik508> eg. new Timestamp(1392144183, 0)
[00:39:23] <erik508> if that's the case, you can do stuff like
[00:39:24] <erik508> mongos> db.fooie.find({"last_used": Timestamp(1392144183, 0)})
[00:39:24] <erik508> { "_id" : ObjectId("52fac1a9ad6e96598f38ceea"), "last_used" : Timestamp(1392144183, 0) }
[00:45:43] <mwolffhh> erik508: no, it's definitely an ISODate
[00:45:53] <erik508> oh. hmmph then.
[01:09:47] <joannac> txt23: installing mongo isn't enough. You have to actually start a server
[01:10:51] <txt23> joannac: I did start the server
[01:12:11] <txt23> joannac: you do it by typing mongod right?
[01:13:47] <joannac> txt23: yes, assuming you have all the paths, etc set up
[01:13:52] <joannac> ps -Alf | grep mongod
[01:13:57] <joannac> and tell me what you see
[01:14:02] <joannac> (and pastebin it please)
[01:17:13] <txt23> joannac: http://pastebin.com/ApB6ZbBF
[01:17:55] <joannac> txt23: yeah, the directory doesn't exist
[01:17:59] <joannac> create it, and try again
[01:17:59] <txt23> joannac: I see the issue there
[01:18:20] <joannac> Also, turn NUMA off
[01:19:54] <txt23> joannac: I have MySQL and web server on the same machine. Would I not need NUMA for it?
[01:27:31] <txt23> joannac: I was able to finally launch it using "numactl --interleave=all mongod". However I'd like mongo to be running in the background. How can I do that?
[01:27:41] <joannac> txt23: --fork
[01:27:58] <txt23> joannac: Thanks!
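Putting joannac's pointers together, a startup sketch (the paths follow the RPM packages' defaults and are assumptions):

```shell
# the dbpath directory must exist before mongod will start
mkdir -p /var/lib/mongo

# --fork daemonizes mongod; it then also requires a --logpath
numactl --interleave=all mongod --dbpath /var/lib/mongo \
    --logpath /var/log/mongodb/mongod.log --fork
```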
[02:49:30] <meltingwax> Does mongo's map reduce implementation do successive reductions for the same key? Such as reduce(key, [reduce(key, firstHalf), reduce(key, secondHalf)]) ?
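The answer meltingwax is after is yes: MongoDB may call reduce again with already-reduced values for the same key, which is why a reduce function must be associative and idempotent and must return the same shape its map step emits. A toy reducer (a plain sum; the names are made up) shows the property being relied on:

```javascript
// A reduce function that is safe to re-reduce: summing is associative.
function reduce(key, values) {
  return values.reduce(function (a, b) { return a + b; }, 0);
}

// reduce(key, [reduce(key, firstHalf), reduce(key, secondHalf)])
// must equal reducing everything at once:
var direct = reduce("k", [1, 2, 3, 4]);
var staged = reduce("k", [reduce("k", [1, 2]), reduce("k", [3, 4])]);
// direct and staged are both 10
```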
[07:19:09] <txt23> I was able to install mongo on my server. However, it seems it's fully accessible from everywhere. What are a few steps I can take to secure the connection so only my local server and one IP have access to it?
[07:29:04] <guuha> I use uuid v4 in my app as _id. Now I can't decide on how to store these: 1) untouched, as String; 2) remove dashes, BINARY_SUBTYPE_BYTE_ARRAY; 3) untouched, BINARY_SUBTYPE_UUID. My db is realtime so lots of deletes + inserts. Nothing will stay permanently
[07:29:27] <guuha> SSD backed
[09:24:19] <agni> hey all
[09:24:56] <ncls> hi
[09:25:50] <agni> i am new to mongodb, what tutorials should I follow?
[09:26:27] <agni> i have set it up on a virtual instance
[09:26:54] <agni> on a linux instance
[09:27:48] <ncls> agni: with which language do you plan to use it ?
[09:28:43] <agni> i want to handle the administration part of the mongodb
[09:29:21] <ncls> in this case, you probably want to try the online interactive tutorial : http://try.mongodb.org/
[09:29:26] <agni> ncls: say, like DBA type
[09:29:40] <agni> ok
[09:29:57] <ncls> it simulates the use of the "mongo" command
[09:30:20] <agni> ok, thats great !
[09:30:22] <ncls> when you run "mongo" on your server, you access this shell
[10:14:17] <tymohat> http://clip2net.com/s/6O20y4 mongo activity, how to investigate it?
[10:34:28] <_boot> Hi, I started draining a shard and it now says draining:true in the sharding status, however nothing seems to be happening with the number of chunks on any of the shards... using 2.5.4, is something wrong?
[11:40:14] <orweinberger> Is there a way to restore a db to a mongos in a way that will split the data to the shards instead of loading it all to one shard?
[11:47:58] <kees_> when i use a file output with a path like 'path => "test_%{+HH}"' the hour is one off; i'm in GMT+1, so right now i'd expect a file called test_12, but i get a file called test_11 - what is the best way to get the right time?
[11:52:16] <Batmandakh> Hello everyone! So, I'm thinking of using the Laravel PHP framework for a project
[11:52:29] <Batmandakh> and I want to implement MongoDB on Laravel
[11:52:44] <Batmandakh> can anyone suggest me a mongodb odm for laravel framework?
[11:53:00] <Batmandakh> I saw https://github.com/boparaiamrit/Laravel-Mandango from http://docs.mongodb.org/ecosystem/drivers/php-libraries/
[11:53:02] <Derick> Batmandakh: I would suggest you try without ODM first, especially if you have never used MongoDB
[11:53:47] <Batmandakh> without? So, how can I start without ODM in laravel?
[11:54:03] <Derick> Use the PHP driver directly? I am sure you can write models in Laravel?
[11:54:07] <Batmandakh> How do you use laravel framework? I'm new to laravel too....
[11:54:17] <Derick> I don't use (or know) Laravel.
[11:54:26] <Batmandakh> oh, I'm new to Laravel...
[11:54:33] <Batmandakh> I see. Thanks Derick!
[11:55:27] <Batmandakh> Please explain to me, why do you suggest going without an ODM? I thought using an ODM is a rapid and simple way to do things...
[11:55:41] <Batmandakh> Hey, I'm sorry for my bad English
[11:55:56] <Batmandakh> It could look rude :P
[11:58:11] <Derick> Batmandakh: because an ODM hides too many things, so you don't learn the underlying technology (MongoDB)
[11:58:20] <Derick> and to use things effectively, you need to understand it
[11:59:35] <Batmandakh> @Derick I see. Thank you very much!
[12:00:45] <Mmike> Hello. What would be the best way to determine if mongodb node is primary or not, from a shell script?
[12:02:51] <Derick> Mmike: use: mongo --quiet localhost:30200 --eval "rs.isMaster()['ismaster']"
[12:03:01] <Derick> will return "true" or "false"
[12:03:33] <Mmike> Derick: magnificent! :)
[12:03:39] <Mmike> thank you, looks too obvious now :)
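Wrapped in a script, Derick's one-liner might look like this (the host:port is a placeholder):

```shell
#!/bin/sh
# Prints "true" when the node at $1 is the replica set primary,
# "false" otherwise.
is_primary() {
  mongo --quiet "$1" --eval "rs.isMaster()['ismaster']"
}

# Usage sketch:
#   if [ "$(is_primary localhost:30200)" = "true" ]; then
#       echo "this node is the primary"
#   fi
```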
[13:30:49] <tscanausa> is there a way to specify the hostname of a server via the config?
[13:31:02] <Derick> config of what?
[13:31:16] <tscanausa> mongod
[13:32:30] <Derick> why would you need to set a hostname there? I am not following
[13:33:16] <kees_> if you want to run multiple instances of mongo on the same box, just use different ips/ports?
[13:33:38] <Derick> yes
[13:34:40] <tscanausa> so the problem I am having is when I try to create a replica set, the first node always starts off by using the short hostname, say "mongo1", and when I add other nodes to the replica set they don't know how to resolve "mongo1" via DNS, so the replica set never starts
[13:35:10] <tscanausa> by never starts the secondaries never join
[13:35:17] <Derick> tscanausa: you should be able to fix the replicaset set config
[13:35:22] <Derick> you can just modify it yourself
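"Modify it yourself" would look something like this in the mongo shell (the hostnames are examples):

```javascript
// Rewrite each member's host to a name every node can resolve,
// then push the new config to the set.
cfg = rs.conf()
cfg.members[0].host = "mongo1.example.com:27017"
rs.reconfig(cfg)
```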
[13:37:01] <tscanausa> Derick: so you're saying the only way to set/change a hostname in mongo is to log into the mongo shell and modify the config.
[13:37:27] <Derick> yes
[13:37:27] <tscanausa> there is no way to set it on boot.
[13:37:38] <kees_> on boot it will read that config
[13:37:50] <Derick> well, an alternative is to only use DNS names and make sure all the nodes' names can be resolved on all nodes
[13:37:59] <Derick> either through DNS, or /etc/hosts
[13:40:25] <tscanausa> I have DNS and hostnames set up for all the nodes, like mongo1.example.com, mongo2.example.com, but the first node in a replica set only uses "mongo1" as its hostname.
[13:43:23] <tscanausa> Do you think it is far fetched to be able to tell mongod what hostname it should use on start?
[13:44:52] <Derick> tscanausa: yes, it's a system configuration thing
[13:46:10] <mierst> hmm, sounds like a feature request
[13:48:16] <tscanausa> so if I were to start a mongo cluster on aws and used cnames on my dns to aws's dns, I would then have to update all of my /etc/hosts and hostnames on all servers participating in the cluster? ( this is not what I am doing but an example of where setting the hostname would be useful )
[13:52:17] <Derick> tscanausa: you should use the DNS assigned names
[13:52:24] <Derick> no need to use /etc/hosts if you use the real DNS names
[14:00:18] <tscanausa> Which DNS names would be used?
[14:36:34] <amitprakash> Hi, for a relational model with tables A, B, C such that A(b=FK(B)), C(id=(pkey(A)|pkey(B)) what would be a recommended mongo document model?
[14:37:36] <Nodex> if your application really does need that level of depth then MongoDB is probably not the right choice for your app
[14:37:44] <amitprakash> oh
[14:38:19] <amitprakash> it does, shipping application, packages inside bags inside dispatches, each of which maintains a status trail
[14:38:50] <hipsterslapfigh-> that looks to be only three levels deep
[14:39:04] <amitprakash> bags could contain bags, packages could contain packages
[14:39:07] <Nodex> why can't you store relevant data as nested data
[14:39:07] <amitprakash> and so on
[14:39:08] <hipsterslapfigh-> i've done 5 levels with mongo relatively painlessly
[14:39:48] <Nodex> the pain will come when you need performance
[14:40:00] <amitprakash> Nodex, the emphasis is on fast retrieval of child nodes and and status trail
[14:40:11] <hipsterslapfigh-> if you really care about performance then there's better choices to start with than mongo anyway :v
[14:40:15] <amitprakash> Or rather, immediate child nodes
[14:40:32] <Nodex> amitprakash : then I would store the relevant data as a sub document
[14:40:42] <tscanausa> amitprakash: to think of it in JSON terms you would probably want to structure it like { A: { B: { C: {} } } }
[14:40:54] <Nodex> tscanausa : yes a sub document
[14:41:13] <tscanausa> Nodex: Correct, but they may not understand the term subdocument
[14:41:20] <Nodex> yawn
[14:41:51] <amitprakash> tscanausa, and what of the status trail against each of A, B and C?
[14:42:14] <amitprakash> A.status = [{..}], A.B.status= [{..}] and so on?
[14:42:19] <Nodex> B and C are statuses
[14:42:44] <tscanausa> amitprakash: yes you could do that, any data could be stuffed in at any level.
[14:43:18] <Nodex> {a:1, b:[{'date':1235,'entry':'something happened'}],c:[{date.....}]}
[14:43:27] <Nodex> too many cooks, good luck
[14:44:35] <amitprakash> tscanausa, isn't that overly complicated? {A: {B: {C: {status: [{}]}, status: [{}]}, status: [{}]}}
[14:46:44] <tscanausa> amitprakash: depends on your needs. If C is a dependent of B, it would be nice to know how many pieces of B are complete, and then you can do the same for A: how many pieces of A (Bs) are needed to complete A. But it all depends on your requirements.
[14:47:58] <amitprakash> tscanausa, a simple enough use case of a shipment being moved around, shipments are aggregated into bags, bags are aggregated into dispatches..
[14:48:29] <Nodex> http://europa.eu/rapid/press-release_IP-14-142_en.htm
[14:48:33] <Nodex> sweet
[14:48:37] <amitprakash> The issue with subdocument is that since multiple As lie in the same C, the sub-document would be replicated entirely across multiple documents in A
[14:49:21] <Nodex> why is that an issue?
[14:50:26] <amitprakash> consider some update I wish to make to C, I'd be updating multiple documents, add to this the index cost, as a separate document, the index size would be much smaller for the sub-document than indexing keys on the larger meta document?
[14:50:48] <Nodex> C is a status
[14:51:29] <amitprakash> No, A is package, B is Bag, C is dispatch, status is a separate sub-document
[14:51:45] <amitprakash> Or if we wish to simplify it, theres package, dispatches and statuses
[14:51:57] <tscanausa> amitprakash: maybe you can reverse it, C is the parent of b and a
[14:52:28] <Nodex> a DOCUMENT is the order. Everything sub of that is something that happens to the ORDER
[14:53:03] <amitprakash> Let me rephrase, currently, a status table S contains entries against instances of A or B. B otoh is a parent of A
[14:53:21] <amitprakash> there is no FK relationship b/w S and A/B however
[14:54:06] <Nodex> ok I'll give you the best piece of advice for working with Mongodb. Forget EVERYTHING you know about relational databases because it does not apply
[14:54:17] <amitprakash> Right, I can accept that
[14:54:34] <amitprakash> What i want to understand is what kind of structure would work to implement this irl scenario
[14:54:40] <Nodex> so now with that in mind Model your data to fit your DB not the other way round
[14:54:44] <amitprakash> We have items of type A which have statuses
[14:54:52] <amitprakash> We have items of type B which have statuses
[14:55:02] <Nodex> A becomes a document and B becomes a document
[14:55:04] <amitprakash> We finally have B being a collection of items of type A
[14:55:13] <amitprakash> Nodex, so no sub-documents
[14:55:20] <Nodex> everything that happens to A (gets shipped, gets moved etc) ends up as a sub document of "A"
[14:55:32] <amitprakash> Fair enough
[14:55:38] <amitprakash> This is how I've approached it atm
[14:55:50] <Nodex> so what's the problem?
[14:56:32] <amitprakash> Now the next bit, how do I find out which items of A belong to B .. well currently B.a_key = [A._id]
[14:57:33] <Nodex> why are A and B related ?, if they're an order then they're two separate entities
[14:58:10] <amitprakash> B however expires in usage at a certain time, at which point A is freed up. However removing A._id from B.a_key isn't acceptable since I want to also know what items of type A used to belong to an instance of B
[14:58:30] <amitprakash> B is a set of items of type A, hence they're related
[14:59:16] <amitprakash> To solve this, I doubly stored A.parent = B._id for some B
[14:59:58] <Nodex> I'm going to simplify this for you. foo, bar, baz <--- all orders from a store, foo gets shipped and thus has an entry in a nested document with a log of this, the same happens to bar and baz.
[15:00:01] <Nodex> HOW are they related?
[15:00:51] <amitprakash> Foo gets shipped, but it gets shipped together with bar and baz under a new identity called foobar
[15:01:02] <amitprakash> so Foo, Bar and Baz are related to Foobar
[15:01:15] <Nodex> you've reversed the logic and approached it from the parent
[15:01:37] <Nodex> look at it from the childs perspective
[15:02:10] <amitprakash> from the child perspective [ package ] gets a label called foobar?
[15:02:33] <Nodex> no. something happens to foo, bar or baz and those documents get updated with a log of this.
[15:02:41] <amitprakash> Fair enough
[15:02:45] <Nodex> If they get moved into a larger container then that's a whole NEW collection
[15:02:53] <amitprakash> Agreed
[15:03:08] <amitprakash> so the document which reflects instances of Foo, Bar and Baz is called A
[15:03:09] <Nodex> 'containers' => a new document gets generated for the container
[15:03:21] <amitprakash> the document which reflects the container is called B
[15:03:30] <amitprakash> entirely different documents A and B
[15:03:43] <Nodex> inside that document you add foo, bar, baz (either as whole nested documents or relations)
[15:03:59] <amitprakash> mongo has relations?
[15:04:05] <Nodex> No
[15:04:19] <Nodex> but you can add the Id's (which I DONT propose)
[15:04:29] <amitprakash> Which is what I am doing as said previously
[15:04:44] <amitprakash> B.a_key = [A._id] where B is the COLLECTION and A is the order
[15:06:08] <Nodex> I'll simplify it even further. Books, Boxes, Containers. Books get sold and get put into boxes (nested), "Boxes" get copied into "Containers". Book sales are stored in a collection called (for brevity) "sales".
[15:06:56] <Nodex> Sales get updated with what happens to them (the logs you discussed). when the "Sale" gets boxed up ready for shipping you copy the entire document into a sub document of "Boxes"
[15:07:13] <Nodex> when the "Boxes" get packed into the container you do the same.
[15:07:25] <amitprakash> Instead of copying the entire document into a subdoc I am copying it as ids
[15:07:29] <Nodex> (into the "Containers" collection)
[15:07:45] <Nodex> that's 3 joins when you don't need any
[15:07:53] <amitprakash> so Containers.subdoc_ids = list(Sales.id)
[15:08:57] <Nodex> Containers -> "a_container_document_id" has a list of all boxes. Inside those boxes are "books"
[15:09:53] <Nodex> in this way you can pull any container and see its contents, any box and see its books, and any book to see what happened to it
[15:09:59] <Nodex> all with one query each
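Hypothetical documents for the model Nodex describes (the ids and title are invented): each level embeds a full copy of the level below, which is what makes the one-query-per-level reads possible:

```javascript
// A sale carries its own status trail...
var sale = { _id: "sale1", title: "Moby Dick",
             log: [ { date: 1392144183, entry: "sold" } ] };

// ...the box embeds whole sale documents, not just their ids...
var box = { _id: "box1", books: [ sale ] };

// ...and the container embeds whole boxes.
var container = { _id: "cont1", boxes: [ box ] };

// db.containers.findOne({_id: "cont1"}) would then show every box,
// every book, and every book's history in a single read.
```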
[15:10:29] <amitprakash> Okay, I'll ask a simpler problem
[15:10:32] <amitprakash> There are no containers
[15:10:35] <amitprakash> only boxes and books
[15:10:51] <Nodex> same approach
[15:10:55] <amitprakash> Can I assume similarly, that the Box document has a list of all the books in it?
[15:11:10] <Nodex> correct
[15:11:59] <Nodex> on the off chance that you have to update any one book then the update requires two updates (one to the sale and one to the boxes document)
[15:12:02] <amitprakash> If so, then instead of storing an instances of Books as subdocument in Box, I am storing the list of id of Book in the Box itself
[15:12:22] <Nodex> but the data is historical and will probably never change
[15:12:38] <amitprakash> Actually, the data changes all the time
[15:12:40] <Nodex> storing the id requires extra queries and is not needed
[15:12:58] <Nodex> is it written more than it's read?
[15:13:07] <amitprakash> No
[15:13:19] <Nodex> once a book has been moved to a box does it change?
[15:13:22] <amitprakash> Yes
[15:13:29] <amitprakash> This happens a lot btw
[15:13:50] <amitprakash> assume 80% of books will change more than once
[15:14:01] <amitprakash> Even after putting into boxes
[15:14:07] <Nodex> if your reads outweigh your writes by more than 3:1 then I would still nest the documents
[15:14:21] <amitprakash> No they don't
[15:14:32] <amitprakash> For all practical purposes there are more writes than reads
[15:14:52] <Nodex> you just said that it's not written more than it's read
[15:14:57] <Nodex> then you reversed that
[15:15:01] <amitprakash> Yes
[15:15:05] <amitprakash> Thought more
[15:15:12] <amitprakash> and realized the latter is true
[15:15:51] <Nodex> it's probably balanced then so either would work, obviously one way uses more disk space
[15:16:07] <amitprakash> I am not worried about space but about indexes
[15:16:10] <Nodex> I maintain that this level of relationships is not really intended for mongodb
[15:16:19] <amitprakash> Thanks
[15:16:29] <Nodex> what do you need to read the most?
[15:16:34] <amitprakash> This is what I've been trying to convince the supervisor
[15:16:35] <Nodex> i/e boxes or books
[15:16:36] <amitprakash> the books
[15:16:41] <amitprakash> We mostly read information on books
[15:16:54] <amitprakash> s/supervisor/project manager
[15:16:55] <Nodex> and how do you look them up . on _id?
[15:17:06] <amitprakash> Yes
[15:17:38] <amitprakash> That, or some set of attributes of Books, such as where it originated from, how far it moved
[15:17:38] <Nodex> then the index is already there and free
[15:18:06] <amitprakash> The index on books is, searching Boxes by books would be very expensive in terms of index
[15:18:16] <amitprakash> We'd be indexing Boxes.books
[15:18:32] <Nodex> I would NEVER store the books ONLY in the boxes anyway
[15:18:37] <Nodex> they're essentially a copy
[15:18:53] <amitprakash> Right, but I need to search boxes by books it might have
[15:19:14] <amitprakash> So say, give me a box which contained all books starting from NYC
[15:19:17] <Nodex> on _id?
[15:19:28] <amitprakash> Not just _id
[15:20:02] <amitprakash> We look up books by _id mostly, but most of the financial work happens over the type of book, the distance it moved, the priority of it etc
[15:20:34] <amitprakash> So the lookups would be on boxes against box.book.distance, box.book.priority, box.book.type
[15:20:34] <Nodex> there is no silver bullet. you can easily search all boxes for books of a title but it obviously needs an index. IF you just store the _id of the book in the box then you need to do at least ONE more query
[15:20:35] <amitprakash> etc
[15:21:07] <amitprakash> Aight, so find books matching above parameters, then find boxes containing above books?
[15:21:16] <Nodex> copying the data into boxes is certainly more flexible
[15:21:26] <Nodex> correct
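With the id-reference layout amitprakash describes (B.a_key holding the child ids), that two-step lookup might read like this in the mongo shell (the "origin" field is an assumption):

```javascript
// Step 1: find the matching books, keeping only their ids.
var ids = db.books.find({ origin: "NYC" }, { _id: 1 })
                  .map(function (b) { return b._id; });

// Step 2: find every box whose id list contains any of those books.
db.boxes.find({ a_key: { $in: ids } });
```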
[15:21:29] <amitprakash> Hmm
[15:21:45] <Nodex> but you have to maintain that copied data
[15:21:56] <Nodex> i/e update it if a book changes
[15:22:00] <amitprakash> Aye
[15:22:04] <amitprakash> That can be done
[15:22:09] <amitprakash> First of all, thank you for this insight, I'll think further on this see if it fits the use case
[15:22:17] <Nodex> which is where your reads vs writes comes in to play
[15:22:47] <Nodex> the highest workload (normally reads) should have the path of least resistance (least amount of queries)
[15:22:59] <amitprakash> On a day to day basis, writes on books is >> reads on boxes
[15:23:30] <amitprakash> Reads on boxes by books happens every third day
[15:24:09] <amitprakash> Reads on books >> Writes on books however
[15:31:14] <remonvv> \o
[15:33:38] <remonvv> quick question; can anyone explain the following oddity; we're running update({a:1, b:1}, {$inc:{c:1}}, true, false) operations quite a bit. This always resulted in the expected result (only one document with {a:1, b:1} exists and an ever increasing counter for "c"). However, we've just found an occurrence where we have 6 documents with {a:1, b:1}. In other words, it seems to violate the atomicity guarantee.
[15:34:00] <remonvv> Can anyone think of a possible scenario where this would happen? It's a non sharded collection, through mongos.
[15:34:30] <remonvv> Documents generated by the same machine and process (the middle 6 bytes of the ObjectId) in a timespan of ~1 second.
[15:40:55] <fabio_c> Hi guys
[15:41:11] <Derick> don't forget about the girls.
[15:41:47] <Nodex> remonvv : is that an update or a findAndModify?
[15:42:04] <Nodex> I think only findAndModify() is guaranteed atomic
[15:42:49] <Derick> Nodex: standard inc ops are too
[15:42:55] <Derick> and push...
[15:42:58] <Derick> etc
[15:43:24] <fabio_c> I have a question: it is possible in mongodb get a field of a referenced object? For example, I need to select all users in my db, each user has a reference to a dogs collection but I need just the dog's name.
[15:43:51] <Derick> fabio_c: you need to do two queries
[15:43:51] <remonvv> Nodex: It's an update
[15:43:53] <remonvv> "Warning: To avoid inserting the same document more than once, only use upsert: true if the query field is uniquely indexed."
[15:43:55] <remonvv> In docs.
[15:43:58] <remonvv> Since when is that??
[15:43:58] <Derick> (or rearchitecture)
[15:44:16] <Derick> remonvv: new to me too
[15:44:17] <Nodex> fabio_c: relations are not supported
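Derick's "two queries" for fabio_c's case would look roughly like this in the mongo shell (the field names and someUserId are assumptions):

```javascript
// No joins in MongoDB: fetch the user first, then project just the
// name out of the referenced document in the dogs collection.
var user = db.users.findOne({ _id: someUserId });   // someUserId: placeholder
var dog  = db.dogs.findOne({ _id: user.dog_id }, { name: 1, _id: 0 });
```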
[15:44:30] <fabio_c> Derick, I needed to do this in mongohub to export some records
[15:45:20] <remonvv> Derick: Has this changed? I distinctly recall discussing this with someone here way back when and he said that all update operations have the "find" part of the update inside the write lock.
[15:45:50] <Derick> remonvv: hmm, I don't know - perhaps it'd be good to write to the mongodb-dev list about it?
[15:46:06] <Derick> it's sometimes tricky to get an authoritative answer on this
[15:46:57] <remonvv> Yes. If it always worked like this it's just a new bit of documentation but if the behaviour changed there are backward compatibility problems.
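The warning remonvv quotes also implies the workaround: with a unique compound index on the query fields, a racing second upsert fails with a duplicate-key error instead of inserting another {a:1, b:1} document. A 2.4-era mongo shell sketch (collection name assumed):

```javascript
// One-time setup: enforce uniqueness on the upsert's query fields.
db.coll.ensureIndex({ a: 1, b: 1 }, { unique: true })

// The upsert from the question can now no longer produce duplicates:
db.coll.update({ a: 1, b: 1 }, { $inc: { c: 1 } }, true, false)
```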
[15:47:30] <Derick> remonvv: can you send me an email about it? I can send it around asking people's advice then
[15:47:43] <remonvv> Sure, PM me your addy
[15:47:49] <Derick> remonvv: or, file a server ticket in jira :-)
[15:47:52] <Derick> remonvv: it's derick@
[15:48:20] <remonvv> I don't know. Which approach gets me the answer the quickest :p
[15:49:24] <remonvv> I'll JIRA it anyway.
[15:49:30] <Derick> remonvv: so do both :D
[15:49:40] <Nodex> slightly OT but anyone know a good company to speak to regarding building a cross platform (Android, iOS, Winblows, Blackberry)?
[15:49:57] <remonvv> a cross platform what?
[15:50:05] <Derick> Nodex: maybe. Not sure whether they do winblows/crackberry :-)
[15:50:20] <Nodex> remonvv : an App, sorry thought that was implicit
[15:50:40] <Nodex> Derick : I'll take it, not sure about them myself!
[15:50:52] <Derick> Nodex: http://www.egeniq.com/services/
[15:50:58] <Nodex> ty ty
[15:51:06] <Derick> they don't mention crackberry
[15:51:11] <remonvv> Fair enough. There are tons of app cookers. There are also a lot of JS based frameworks that build to those platforms.
[15:51:13] <Derick> Nodex: feel free to name drop if you talk to them :-)
[15:51:44] <Nodex> I have a client who wants an app and I would rather farm the work out tbh
[15:52:01] <Nodex> and yes I will drop your name and have them send you some Whisky!
[15:52:05] <Derick> hehe :D
[15:52:14] <Derick> I got another bottle yesterday. Win.
[15:52:29] <Derick> it's handy to have an amazon wishlist for them
[16:10:57] <Joeskyyy> Anyone fancy with the perl driver and replsets by chance? :D
[17:42:52] <atmo> hello, I'd like to create separate objects based on a 2nd level child of a document, but it seems that .forEach can only be used on collections?
[17:43:35] <atmo> i would like to convert each of db.collection.team.players.playerA...Z to individual playerA...Z documents
[17:45:09] <atmo> http://pastebin.com/jTcpBZzw at line4 i get the error that (players) has no method foreach
[18:17:47] <Joeskyyy> atmo: you mean something like this: db.test.find().forEach(function(t){for (var p in t.team.players) print (p)})
[18:17:48] <Joeskyyy> ?
[18:19:55] <atmo> i will give that syntax a shot
[18:21:07] <atmo> seems to work Joeskyyy, thanks. wasn't sure which js functions I could co-opt :)
[18:22:24] <Joeskyyy> In the shell, pretty much everything works JS wise :D
[18:22:41] <atmo> although it doesn't work when i try to print the p.name property
[18:22:51] <atmo> db.teams.find().forEach(function(t){for (var p in t.players) print (p.name)})
[18:22:57] <ghanima> hello all
[18:22:59] <ghanima> quick question
[18:23:33] <Joeskyyy> Can you give us an example document atmo?
[18:23:45] <Joeskyyy> pastebin if you can please
[18:24:45] <atmo> http://pastebin.com/4tcjfE0f Joeskyyy
[18:24:56] <atmo> i would like to create new objects from each of players.*
[18:25:48] <atmo> the key value of each players child can be discarded
[18:27:18] <ghanima> I have an 8-core, 124GB MongoDB slave that is primarily a read-heavy database (with very very very little writes). All queries seem to fit into memory very well... However from a performance standpoint my load average spikes way above the number of cores I have on the machine.. I mean load averages as high as 300.... However all my clients so far seem to be getting query results in a reasonable amount of time and are not seeing
[18:27:24] <ghanima> when connecting to the mongodb server
[18:28:01] <ghanima> The only thing that I can see (still digging into this) is that the run queue has bursts of 30 to 150 processes in the r state
[18:28:10] <Joeskyyy> p.Name wouldn't work since you would technically need to call (as an example) t.players.medic.name
[18:28:16] <ghanima> outside of that this is the only thing that I am seeing that I can attribute to the spike
[18:28:18] <atmo> yeah
[18:28:28] <ghanima> Has anyone else seen this level of activity in mongo
[18:28:37] <Joeskyyy> And you can't use the positional operator in a function I don't think.
[18:28:38] <ghanima> I mean on a MongoDB server
[18:28:56] <atmo> i was hoping i could do like $(t.players).each(e,i){...} kind of thing
[18:29:14] <Joeskyyy> ghanima: Remember that mongoDB is single threaded as of right now
[18:30:39] <atmo> could, um, iterate through an array of ['medic','demoman'...] key names?
[18:31:41] <Joeskyyy> that would work too, as long as that's constant. I'm trying to poke my brain for a better way of doing that though
[18:34:02] <Joeskyyy> To be clear atmo, you're wanting to essentially retrieve the entirety of the subdocument (say "medic") to use?
[18:34:21] <atmo> yeah
[18:34:43] <atmo> i realized that having player information as children of a team was stupid since players change team all of the time
[18:34:54] <atmo> so it would be more logical to assign a team property field to the player
[18:46:48] <Joeskyyy> atmo: The only way I can really think of would be to just script a way to pull the information you need
[18:46:52] <Joeskyyy> then create the documents that way
[18:46:52] <Joeskyyy> :\
[18:47:33] <Joeskyyy> Even using aggregation can't work, since it's not in an array (your team)
[18:47:46] <Joeskyyy> If your players had been in an array instead, we could have unwound it using aggregation
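The array layout Joeskyyy mentions would let the aggregation framework do the splitting, along these lines (field names assumed to match atmo's documents):

```javascript
// With players stored as an array, $unwind emits one document per
// player, which $project can then reshape into standalone player docs.
db.teams.aggregate([
  { $unwind: "$players" },
  { $project: { _id: 0, team: "$_id", player: "$players" } }
])
```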
[18:49:37] <atmo> yeah, I'm going to have to do like if p = "medic" etc
[18:49:38] <atmo> oh well
[18:52:42] <atmo> db.teams.find().forEach( function(t){ for (var p in t.players) { print(t.players[p].Name) } } )
[18:52:45] <atmo> this is a start :p
[18:56:00] <Joeskyyy> There we go :D
[19:08:24] <atmo> http://pastebin.com/KngcpUEd Joeskyyy that was all it took in the end
[19:09:57] <Joeskyyy> sweeeet
[19:10:45] <Joeskyyy> ah, i didn't realize you didn't need the "scout1" "scout2" parts
[19:10:46] <Joeskyyy> very nice
[19:11:05] <Joeskyyy> backlogging that one in my mind haha
[19:11:06] <atmo> yeah it was already under "Role"
[19:11:17] <Joeskyyy> yeah totally overlooked that when looking at your documents
[19:11:33] <atmo> i only added that "scout1" etc bit because .. iirc.. firefox didn't like me creating objects without keys
[19:12:19] <Joeskyyy> well best of luck restructuring now that you have what you need :D
[19:12:24] <atmo> thanks!
[19:12:37] <atmo> cheers for your help and encouragement
[19:12:44] <Joeskyyy> anytime my friend
[19:29:24] <Mmike> Hi. How can I verify if journaling is enabled?
[19:31:11] <Joeskyyy> db.serverStatus()
[19:31:15] <Joeskyyy> Check for the "dur" field
[19:31:40] <Joeskyyy> or just do db.serverStatus().dur
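Joeskyyy's check boils down to testing for the presence of the "dur" section, which db.serverStatus() only reports when journaling is on. A plain-JavaScript sketch of that check, using trimmed, hypothetical samples of the serverStatus() output rather than a live server:

```javascript
// Hypothetical, trimmed serverStatus() outputs: "dur" is present only
// when journaling is enabled on the mongod.
const statusWithJournal = { ok: 1, dur: { commits: 30, journaledMB: 0 } };
const statusWithout = { ok: 1 };

// The check itself: does the status document carry a "dur" section?
const journalingEnabled = (status) => "dur" in status;

console.log(journalingEnabled(statusWithJournal)); // true
console.log(journalingEnabled(statusWithout));     // false
```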
[19:32:27] <oblio> hulla!
[19:45:24] <hahuang65> does mongo optimize a query like this db.collection.find({id: 1, $or: [{id: {$in: [1, 2, 3]}}])
[19:47:38] <Joeskyyy> define optimize? also, that's not how you use $or
[19:49:01] <Mmike> Hi, guys. I'm constantly getting "assertion 13440 bad offset:0 accessing file"
[19:49:08] <Mmike> i run repair, and after a while, again these errors
[19:50:59] <Joeskyyy> did you recently move anything?
[19:51:13] <Mmike> well, the app is doing tons of updates
[19:51:16] <Mmike> constantly
[19:51:51] <hahuang65> Joeskyyy: what do you mean that's not how you use or? You mean it's syntactically incorrect or are you trying to assume what I'm trying to do?
[19:52:05] <Joeskyyy> Syntactically
[19:52:19] <hahuang65> Joeskyyy: ah okay, what's wrong with it?
[19:52:34] <Joeskyyy> db.collection.find({$or: [{id: 1}, {id: {$in: [1, 2, 3]}}]})
[19:52:42] <hahuang65> Joeskyyy: hah
[19:52:45] <Joeskyyy> It's kinda weird to think about especially if you've ever programmed
[19:52:47] <hahuang65> Joeskyyy: you're assuming that's what I want
[19:52:55] <hahuang65> Joeskyyy: that's not syntax
[19:53:07] <hahuang65> Joeskyyy: issue here is, that query I typed, is literally the query I have
[19:53:15] <hahuang65> Joeskyyy: I understand that the $or isn't or'ing with anything
[19:53:29] <Joeskyyy> Well you're obviously doing something wrong. If you're going to be catty, I wouldn't suggest expecting help ;)
[19:53:39] <hahuang65> Joeskyyy: haha, lemme explain
[19:53:45] <hahuang65> Joeskyyy: a rubygem is generating this query for me
[19:53:54] <hahuang65> Now, I want to know, since the $or is obviously useless here
[19:54:16] <hahuang65> does Mongo take care of that for me, or do I have to remove this $or clause from each query in my app to not have a slowdown in my queries
[19:54:58] <hahuang65> $or clause has 1 condition in it, which means it's the same as doing {id: 1, id: {$in: [1, 2, 3]}}, which is the same as {id: 1}
[19:55:27] <hahuang65> this is clearly a bug in the gem, but I'm wondering if I have to handle it myself, or if mongo will see that there's a way to optimize the query down to {id: 1}
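hahuang65's reduction can be checked mechanically: top-level conditions in a find() filter are ANDed together, so a one-clause $or contributes only its single condition, and id ∈ [1, 2, 3] is implied by id === 1. A plain-JavaScript rendering of both predicates over sample documents:

```javascript
// Sample documents standing in for the collection.
const docs = [{ id: 1 }, { id: 2 }, { id: 4 }];

// {id: 1, $or: [{id: {$in: [1, 2, 3]}}]} means id === 1 AND id in [1,2,3].
const withRedundantOr = docs.filter(d => d.id === 1 && [1, 2, 3].includes(d.id));

// The collapsed form {id: 1}.
const plain = docs.filter(d => d.id === 1);

// Both filters select the same documents.
console.log(JSON.stringify(withRedundantOr) === JSON.stringify(plain)); // true
```

Whether the server's query planner actually skips the redundant clause is a separate question (the one Joeskyyy answers next); the predicates themselves are provably equivalent.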
[19:55:49] <Joeskyyy> Mongo will try to find the shortest path it possibly can take.
[19:57:40] <hahuang65> Joeskyyy: haha is that guaranteed in this case
[20:01:09] <Joeskyyy> I suppose it depends a bit on your dataset and how it's indexed.
[20:01:47] <Joeskyyy> http://pastebin.com/0frc6tgY
[20:02:08] <Joeskyyy> Given an (albeit small) dataset like that, since _id is indexed, it was efficient regardless.
[20:03:19] <hahuang65> appreciate the help
[20:27:06] <timing> Hi all, is there a way to disable stop words in the mongo text index beta?
[20:27:22] <timing> a user searched for "all of me" which is the name of a song but couldn't find it
[20:27:56] <timing> there were no results
[20:27:59] <timing> (as expected)
[20:39:57] <mst1228> what's the easiest way to limit the number of things i can put in an array field of a document?
[20:40:23] <mst1228> just doing a manual check before update, or is there any way to limit it automatically?
[20:49:54] <cheeser> mst1228: there isn't a way, no.
[20:50:10] <mst1228> cheeser: cool, thanks a lot
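One option the channel didn't mention: since MongoDB 2.4, $push combined with $each and $slice can keep an array field trimmed on every update, which is close to what mst1228 asked for. The collection and field names below are hypothetical; the shell form is in the comment, with a plain-JavaScript rendering of what a negative $slice leaves behind:

```javascript
// Shell sketch (hypothetical names): push one item, keep only the last 10.
//   db.coll.update({_id: id}, {$push: {items: {$each: [newItem], $slice: -10}}})

// Plain-JS equivalent of $slice: -10 applied after the push:
const existing = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]; // array already at the cap
const newItem = 11;
const trimmed = existing.concat([newItem]).slice(-10); // keeps the last 10

console.log(trimmed.length, trimmed[0], trimmed[9]); // 10 2 11
```

It caps by dropping the oldest elements rather than rejecting the update, so it fits "rolling window" cases, not "refuse inserts past N" cases.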
[21:01:32] <hahuang65> mst1228: it's unlikely that this will fit your use case, but there's always a slight chance. You can check out capped collections
[21:02:25] <hahuang65> mst1228: i'm not sure whether you can cap a collection based on a field or not.
[21:03:58] <cheeser> ...
[21:04:21] <cheeser> collections are capped on size in bytes or doc count
[21:21:14] <hahuang65> cheeser: on size, but you can also specify a doc count
[21:21:18] <hahuang65> cheeser: http://docs.mongodb.org/manual/core/capped-collections/
[21:22:08] <hahuang65> cheeser: it was a reach, but it was just a suggestion. I don't know what his use case may be, but it MAY (on an off chance) fit the use case into capped collections
[21:22:23] <cheeser> 16:01 < cheeser> collections are capped on size in bytes or doc count
[21:22:31] <cheeser> yes. doc count. i know. and said so.
[21:22:54] <hahuang65> cheeser: I got confused and thought it was a question, my bad.
[21:48:55] <tongcx> hi guys, is it possible to do .find({[field1, field2] : [1,3]})?
[21:50:56] <NoirSoldats> I'm going to do a horrible job explaining this problem, but hang with me. We have a document with an embedded array of 'certifications', when we remove one and persist the entire document again the removed certification is still there. We are using Symfony2 and the Doctrine ODM.
[21:57:48] <tongcx> this room seems not very active
[21:58:22] <cheeser> NoirSoldats: consider using $pull
[21:58:31] <cheeser> tongcx: when you tried what happened?
[21:58:45] <cheeser> but no, that is not the syntax for find()
[22:00:31] <tongcx> cheeser: yes, it doesn't allow that
[22:00:49] <tongcx> cheeser: so I want to do '$within : box'
[22:00:58] <tongcx> cheeser: but my lat and lng are two fields instead of one
[22:02:48] <cheeser> this? http://docs.mongodb.org/manual/reference/operator/query/geoWithin/
[22:03:41] <joannac> tongcx: lat : $lt X, $gt X2, lon: $lt Y, $gt Y2
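joannac's suggestion, written out: because lat and lng are separate fields, an axis-aligned box is just two range conditions ANDed together. Field names and bounds below are hypothetical; the shell form is in the comment, with the same predicate in plain JavaScript:

```javascript
// Shell sketch (hypothetical field names and bounds):
//   db.points.find({ lat: { $gt: 40, $lt: 50 }, lon: { $gt: 0, $lt: 10 } })

// The same axis-aligned box predicate in plain JavaScript:
const inBox = (p) => p.lat > 40 && p.lat < 50 && p.lon > 0 && p.lon < 10;

console.log(inBox({ lat: 45, lon: 5 })); // true
console.log(inBox({ lat: 55, lon: 5 })); // false
```

As joannac notes, this only covers an axis-aligned box; a rotated box needs real geo queries (e.g. $geoWithin) over a combined coordinate field.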
[22:03:51] <joannac> or do you want a rotated box?
[22:10:28] <tongcx> joannac: yes that works
[22:10:43] <tongcx> joannac: just, is there any way to rearrange fields before queries
[22:10:44] <tongcx> ?
[22:22:31] <NoirSoldats> So, here's what seems to be the problem... We have a set of certifications "certifications" : [ {"name" : "Cert 1", "desc" : "Cert 1 Desc" },{"name" : "Cert 2", "desc" : "Cert 2 Desc" }] and then we want to empty that up so we send "certifications" : [] but when we pull that document back up it still has the original certs.
[22:24:02] <joannac> tongcx: rearrange? in what sense?
[22:24:43] <joannac> NoirSoldats "send" is not a valid word. What do you actually mean? update? what's the exact update statement?
[22:28:31] <NoirSoldats> joannac: We don't have the exact update statement, we're using the Doctrine ODM.. Let me see if there's an equivalent to getSQLQuery()
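For reference, what cheeser's $pull suggestion (or a plain $set) would look like at the shell level for NoirSoldats' documents. The collection name is hypothetical; the shell forms are in the comments, with a plain-JavaScript equivalent of the $pull:

```javascript
// Shell sketches (hypothetical collection name):
//   db.profiles.update({_id: id}, {$set: {certifications: []}})               // empty the array
//   db.profiles.update({_id: id}, {$pull: {certifications: {name: "Cert 1"}}}) // remove one entry

// Plain-JS equivalent of the $pull, on the array from the paste above:
const certifications = [
  { name: "Cert 1", desc: "Cert 1 Desc" },
  { name: "Cert 2", desc: "Cert 2 Desc" }
];
const afterPull = certifications.filter(c => c.name !== "Cert 1");

console.log(afterPull.length); // 1
```

Either way the change happens server-side in one update, which sidesteps the ODM's persist/flush change-tracking entirely.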
[22:35:56] <freeone3000> I'm getting a "WARNING: Server seen down: 10.0.1.6:27017" from my driver when `mongo --host 10.0.1.6 --port 27017` works just fine. Any suggestions?
[22:36:32] <joannac> intermittent network problem?
[22:36:38] <freeone3000> That particular server is a mongos instance, and each shard is a replica set, but currently every server is visible using 10.0.1.0/24 to the host.
[22:36:43] <freeone3000> joannac: I'd agree, if it wasn't repeatable.
[22:37:19] <joannac> are you testing on the same server as wherever your code (using the driver) is?
[22:37:23] <freeone3000> joannac: Yes.
[22:37:43] <joannac> turned on verbose logging in the driver?
[22:41:55] <freeone3000> joannac: Running with -DDEBUG.MONGO=true -DDB.TRACE=true, log: https://gist.github.com/freeone3000/80c4efe62712f7e2925b
[22:42:54] <joannac> Um, that's not the error you reported
[22:43:11] <joannac> can't find a master
[22:43:21] <joannac> are all your replsets up?
[22:43:57] <freeone3000> joannac: How should I check this from mongo console?
[22:44:43] <joannac> connect to a node from each replset and run rs.status()
[22:45:01] <NoirSoldats> joannac: We're using doctrine's ->persist() followed by a ->flush().. I'm starting to think the problem lies in there..
[22:45:51] <freeone3000> joannac: Gives me https://gist.github.com/freeone3000/8c5fff3aff25faa4e0ba , which looks right to me.
[22:46:59] <freeone3000> joannac: 10.0.1.6 is mongos, 10.0.1.7 and 10.0.1.8 are members, 10.0.1.6 is also an arbiter on a separate port in a separate process. We have a PRIMARY, a SECONDARY, an ARBITER, all three config servers, I don't see why the driver would cause a failure.
[22:47:26] <joannac> freeone3000: you only have one shard?
[22:47:40] <freeone3000> joannac: Yeah, this is supposed to prove the architecture works with our services.
[22:47:55] <freeone3000> joannac: So we have a sharded server with one shard.
[22:50:36] <joannac> what's your connection string?
[22:52:58] <freeone3000> joannac: new MongoClient(Lists.newArrayList(new ServerAddress("10.0.1.6"), new ServerAddress("10.0.1.6")), new MongoClientOptions.Builder().readPreference(ReadPreference.primaryPreferred()).writeConcern(WriteConcern.UNACKNOWLEDGED).build()));
[22:53:28] <joannac> why are you initiating with a list with 2 identical IPs?
[22:53:57] <freeone3000> joannac: Because I only have one mongos server and the code that gets to this point expects two.
[22:54:30] <joannac> 10.0.1.6:27017 is your mongoS?
[22:54:36] <freeone3000> joannac: Yes.
[22:55:10] <joannac> 10.0.1.6 can see 10.0.1.7:27017?
[22:56:06] <freeone3000> joannac: Yes.
[22:56:56] <freeone3000> joannac: And 10.0.1.7 can see 10.0.1.8, and 10.0.1.8 can see 10.0.1.7 and 10.0.1.6, and they can all see 10.0.1.6 on a different port, too, and moreover, they can all see 10.0.1.243, which is where the application happens to be running, and it can see all of them.
[22:57:27] <freeone3000> joannac: Verified with `mongo` and `tcping` every step of the way.
[22:58:14] <joannac> then I don't know
[22:58:20] <joannac> easily reproducible?
[22:58:57] <freeone3000> joannac: Every single time.
[22:59:37] <joannac> what version of mongodb and driver?
[23:01:30] <freeone3000> joannac: mongos is running 2.4.9, each mongod instance on the shard is 2.4.8 (except for the arbiter, who is 2.4.9); driver version is ... 2.12.0-COMPANYNAME
[23:01:48] <joannac> java driver?
[23:02:15] <freeone3000> joannac: yes, org.maluubadb:mongo-java-driver:2.12.0-COMPANYNAME.
[23:03:22] <joannac> okay, the official driver only goes to 2.11
[23:03:32] <joannac> so i dunno if you built your own or what
[23:06:27] <freeone3000> Yeah, looks like we applied a patch to dns resolution caching.
[23:07:21] <SirFunk_> if i'm using authentication .. is there a way to make one database so that anyone can access it withot username/pass?
[23:26:00] <freeone3000> Updated to 2.11.0, ran into https://jira.mongodb.org/browse/JAVA-793 . Is there a fix in upstream for this?
[23:27:11] <cheeser> um. it says right there. 2.11.1.
[23:35:40] <ehershey> hi
[23:36:06] <ehershey> just saying
[23:43:34] <freeone3000> new version of driver, new error. Version 2.11.3 gives "com.mongodb.MongoException: can't find a master". Same client. Still all reachable. rs.status() is still okay. sh.status() is still okay. `mongo` still works.
[23:47:37] <freeone3000> Running 2.4.8, there should be no problem specifying two members of a replicaset in the host for a shard, correct?