PMXBOT Log file Viewer


#mongodb logs for Tuesday the 10th of June, 2014

[00:46:12] <Lawhater> Hello, what port does mongo have on mac?
[00:46:29] <Lawhater> I need to assign a port number to MONGODB_PORT in Python mongokit
[00:46:36] <Lawhater> or can i pick any
[00:57:37] <joannac> default is 27017. you can pick whatever you want, but don't let it clash with something else
[03:26:11] <abonilla> does anyone know how to configure mongodb.conf so that it starts as part of a shard or replica?
[03:32:15] <joannac> abonilla: add the replSet option?
[03:39:26] <abonilla> joannac: using the config file, not by running a full start command.
[03:40:29] <abonilla> joannac: I now see an example file, so replSet = xxx also works in the config file.
[03:40:39] <joannac> yes
[03:41:03] <abonilla> so all servers I start with that config will "find" each other?
[03:41:35] <abonilla> ah no, that is with rs.add
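The replSet approach discussed above can be sketched as a 2.4-era flat-format mongod.conf fragment (the set name `rs0` and the paths are illustrative, not from the log). As abonilla noticed, the config only names the set; members still have to be joined from the shell with rs.initiate() and rs.add():

```ini
# illustrative mongod.conf (2.4-era flat format; names are assumptions)
port = 27017
dbpath = /var/lib/mongodb
logpath = /var/log/mongodb/mongod.log
replSet = rs0
```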
[03:45:13] <javeln> hey all, question on using MongUpdateStorage with pig
[03:45:22] <javeln> i'm using '{_id:{\$oid:"\$uuid"}}' as my query but i keep getting 'invalid ObjectId [$uuid]' as an error
[03:45:44] <javeln> it seems like it's not substituting the `uuid` field from the tuples
[03:47:04] <javeln> when i did '{_id:{\$oid:"<validoid>"}}' it didn't throw an error, but it then it only update that 1 dock
[03:47:49] <javeln> *doc* . . . how do i use the _id as the query?
[08:02:09] <coRz> good morning all
[08:02:25] <coRz> anyone here?
[08:03:42] <dragoonis> Porting all my background processing jobs from MySQL -> MongoDB now :)
[08:03:50] <dragoonis> Today is a MongoDB day
[08:04:01] <coRz> i got a question regarding saving string in mongodb
[08:04:14] <coRz> i got css document that i want to save in the db
[08:04:29] <coRz> the problem is the '.class' dot in the classes i have
[08:05:59] <coRz> anyone?
[08:06:42] <rspijker> well.. field names can't have dots
[08:07:20] <coRz> yeah that i understood
[08:07:29] <coRz> so whats the solution for that? replacement ?
[08:07:32] <rspijker> so, either make sure dots never appear in field names, or make a little wrapper that replaces dots in field names with some unicode character not likely to show up in your actual content and replace it back on read
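A minimal sketch of the wrapper rspijker describes, assuming a full-width dot (U+FF0E) never appears in the actual CSS content; encode field names on write, decode them on read:

```python
# Swap '.' in field names for a look-alike character MongoDB allows,
# recursing into nested documents; decode_keys reverses the mapping on read.
DOT = "\uff0e"  # full-width dot, assumed absent from real keys

def encode_keys(doc):
    return {k.replace(".", DOT): encode_keys(v) if isinstance(v, dict) else v
            for k, v in doc.items()}

def decode_keys(doc):
    return {k.replace(DOT, "."): decode_keys(v) if isinstance(v, dict) else v
            for k, v in doc.items()}

css = {".header": {"color": "red"}, "#main .body": {"margin": "0"}}
stored = encode_keys(css)        # safe to insert: no '.' left in any key
assert all("." not in k for k in stored)
assert decode_keys(stored) == css
```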
[09:00:54] <Nopik_> hi, i'm experiencing a weird situation with mongo: after inserting a record, i query for it and it is missing, although if i retry the query e.g. a second later, it gets returned. it happens both on my devel machine (just a single mongod) and in production (3 servers + sharding). writes are done with { w: 'majority' }, although if I do { w: 3 } it works. It is not replication lag, as I send my queries to the master. It is 2.4, though, not 2.6 yet. Has anyone seen something similar?
[09:47:56] <Industrial> Hi.
[09:47:58] <Industrial> https://gist.github.com/Industrial/f141d1564a7cb70cb3d0
[09:48:03] <Industrial> made a quick script to remove some collections
[09:48:08] <Industrial> but they are not being removed
[09:48:12] <Industrial> any idea why?
[09:48:25] <Industrial> line 26 prints the same collection count every run
[09:48:33] <Industrial> so the remove isnt actually removing?
[09:48:38] <Nodex> coffee script :(
[09:48:56] <Industrial> YESYESYES
[09:49:14] <Nodex> can you compile that to proper javascript, perhaps someone can help then
[09:49:24] <Industrial> fuck. off.
[09:49:38] <Industrial> I thought this would happen before I posted it
[09:49:53] <Nodex> ok good luck then idiot
[09:50:03] <Industrial> I'm not going to bother anymore with people that go MEEH it something I dont like
[09:50:16] <Industrial> I'm now asking specifically for help-but-not-yours
[09:50:25] <Nodex> lol retard, good luck with that
[09:50:28] <Industrial> cs is a runtime
[09:50:30] <Industrial> deal with it
[09:50:37] <Nodex> you're an idiot, deal with it
[09:50:38] <Industrial> its a damn remove method
[09:50:45] <Industrial> help me with the remove method
[09:50:54] <Nodex> another one for the ignore list
[09:51:05] <Industrial> dont be a retard over the first thing you see that you can write off as something you dont like
[09:51:07] <Industrial> thats not HELP
[09:51:11] <Industrial> thats being a giant DICK
[09:51:23] <Industrial> have a nice day
[09:51:34] <Nodex> children like you crack me up, you seem to think that help is a god given right
[09:55:10] <Industrial> Hi. I have a Java/Python/Ruby/Perl/Lua/whateveryounameit problem with mongodb
[09:55:22] <Industrial> I'm calling a remove method but I dont see collections removed
[09:56:06] <Industrial> for clarity and because people are generally unable to read this one language called coffeescript, I've taken the liberty of converting all code I write every day into something you might understand
[09:56:15] <Industrial> just to ask a question
[09:56:37] <Industrial> https://gist.github.com/Industrial/2b39c5f043368c294cc3
[09:57:01] <Industrial> So I am calling remove on line 31
[09:57:03] <Industrial> Any idea?
[09:59:52] <Nodex> LOL
[10:00:05] <Industrial> (The annoying thing about this all is that probably I'm not going to get any help anymore because of my outburst earlier, but I was the one asking a question, the one getting a "NO I DONT LIKE WHAT YOU LIKE SO UR STUPID" in his face)
[10:00:06] <Nodex> children like you crack me up, you seem to think that help is a god given right and that everyone should bend to you to help.
[10:00:27] <Industrial> I'm 27, let's move on
[10:00:30] <Industrial> to the content of my question
[10:00:34] <Nodex> Then act like it Tom
[10:00:58] <Industrial> so admit your childish response to my question and i will
[10:01:00] <Industrial> and well call it even
[10:01:26] <Industrial> <some proverb about pots and kettles>
[10:01:43] <Nodex> lmao, good luck with that, you told me to fu
[10:01:56] <Nodex> "fuck off" then went on some rant about coffee script
[10:02:21] <Industrial> horses not allowed indoors, please get off
[10:02:39] <Nodex> that's not the normal behaviour of a grown up person seeking help
[10:02:57] <Industrial> its one of one getting the same cocky attitude for years on irc
[10:03:08] <Industrial> because people dont like what you like
[10:03:15] <Industrial> anyway mongo has an API in a gazillion languages
[10:03:24] <Industrial> I could've asked a question about/with any of them
[10:03:33] <Industrial> YOU dont like coffee, why should I not because you dont?
[10:03:42] <Nodex> did I say anything about not liking it?
[10:03:46] <Industrial> YOU are being childish, I'm just the victim of it
[10:03:56] <Nodex> ALL I ASKED FOR was to compile it to Javascript
[10:03:59] <Industrial> "redo it in a language I want to understand, or i wont help"
[10:04:34] <Nodex> why on earth should anyone help someone who is not willing to help themselves?
[10:04:46] <Nodex> your arrogance is astounding
[10:06:33] <Industrial> well I've got some things to think about; maybe one for you could be to think about whether the best reply to any question asked on IRC, whether or not you agree with the philosophical backings of the programming language involved, would be "meet my demands or receive no help".
[10:07:13] <Industrial> I'll stop wasting my time now and actually fix the thing.
[10:11:04] <Nodex> LOL this is why you're childish - because you still seem to want to blame someone else because you cannot be bothered to compile coffee script into a language that many more people in this channel understand. I have been in this channel for the better part of 3 years and I have NEVER seen coffee script mentioned once by any of the regulars so I asked you to compile it down to HELP YOU OUT.
[10:11:04] <Nodex> Your childish attitude immediately saw it as an attack on your beloved language and you went on a nerd rage. Perhaps if you had thought before you typed you would have an answer by now
[10:11:50] <Nodex> Secondly, abusing people is NOT the way to get help.
[10:15:21] <Industrial> I actually spent time thinking about converting it to js first before I asked. I don't blame anyone for my inability to convert CS to JS. I am blaming you for the type of response you gave me that is by now so typical of JS users that dislike coffeescript. It's not the first nor last time this will happen but I opted to go for the CS gist
[10:15:44] <Industrial> what if i'd asked the question with a python API?
[10:16:06] <Industrial> I would have been fine with receiving no help because maybe you don't have experience with the python API
[10:16:40] <Industrial> the same goes for coffeescript, but the usual response you get from people is 'hey its somewhat similar to what i.. oh no.. coffeescript..."
[10:17:31] <Industrial> It's also not my beloved language, because I do this at work. I prefer LiveScript if you have to know
[10:18:47] <Industrial> I seem to be fighting for equality.
[10:19:54] <Nodex> I personally don't know python but many people in here do. I am not debating languages with you, I have better things to do. You don't seem to want to do what's needed to get help so either wait for someone who knows CS or compile it / write it in a more widespread language that more people know. Else you probably won't get help
[10:26:28] <Industrial> Nodex: I've since pasted the js output
[10:26:30] <Industrial> but the issue was
[10:26:34] <Industrial> 'use drop not remove'
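The fix Industrial found makes sense: remove() deletes documents but the now-empty collection still shows up in the listing, while drop() removes the collection itself. A driver-free sketch of the difference (the dict-based "database" and collection names are invented for illustration):

```python
# Model a database as {collection_name: [documents]} to contrast the two calls.
db = {"logs": [{"x": 1}], "users": [{"x": 2}]}

def remove_all(db, name):
    db[name].clear()        # documents gone, collection itself remains

def drop(db, name):
    db.pop(name, None)      # collection gone from the listing entirely

remove_all(db, "logs")
assert "logs" in db and db["logs"] == []   # still listed, just empty

drop(db, "users")
assert "users" not in db                   # no longer listed at all
```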
[10:27:11] <lxsameer> hey guys, is it possible to search for all Site documents which have an embedded Domain document with value of "example.com"
[10:27:28] <Derick> lxsameer: what's a "Site document" ?
[10:27:43] <lxsameer> Derick: I use that name for an example
[10:27:52] <Derick> lxsameer: can you show an example document?
[10:27:58] <Derick> (in a pastebin please)
[10:28:28] <lxsameer> Derick: i didn't write anything yet, I'm confused about the theory
[10:29:03] <Derick> well, if your document is: { blah: { 'Domain' : 'example.com' } }
[10:29:09] <Derick> then you can easily search on that
[10:29:28] <Derick> with db.colname.find( { 'blah.Domain' : 'example.com' } );
[10:29:58] <lxsameer> Derick: is that 'Domain' an embedded document?
[10:30:11] <Derick> { 'Domain' : 'example.com' }
[10:30:14] <Derick> is the embedded document
[10:30:23] <lxsameer> Derick: thanks buddy
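Derick's dot-notation filter works the same from any driver; a tiny matcher sketches what 'blah.Domain' resolves to against the embedded document (the field names are from his example above):

```python
def matches(doc, dotted_path, value):
    # Walk the embedded documents one path segment at a time,
    # mirroring how dot notation resolves in a find() filter.
    cur = doc
    for part in dotted_path.split("."):
        if not isinstance(cur, dict) or part not in cur:
            return False
        cur = cur[part]
    return cur == value

doc = {"blah": {"Domain": "example.com"}}
assert matches(doc, "blah.Domain", "example.com")
assert not matches(doc, "blah.Domain", "other.org")
# with a driver this is just: db.colname.find({"blah.Domain": "example.com"})
```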
[11:08:09] <mischat> yay to RC1 ... there is a bug fix in there which is stopping us upgrading to 2.6.0 ... I wonder how many more release candidates there will be
[11:30:10] <daslicht> when using mongo native i get results after inserting an item like this:
[11:30:10] <daslicht> [ { content: 'click to edit',
[11:30:10] <daslicht> _id: 5396eb2e5e5c5ab6d1a2ab05 } ]
[11:30:12] <daslicht> to be able to access the keys i would have to use result[0]._id
[11:30:14] <daslicht> is there a way to get rid of the array ?
[11:30:16] <daslicht> or what am i missing please
[11:41:28] <sleepr> is a collection in mongo about the same as a table in mysql?
[11:41:48] <rspijker> sleepr: about
[11:45:17] <Derick> sleepr: with a difference is that not every row (document) needs to have the same fields
[11:48:40] <sleepr> Derick: hm ok. i think ill have to do some more reading :P thanks :D
[12:01:04] <Gr1> Hi everyone
[12:01:46] <ildiroen> Hey
[12:02:07] <Gr1> I have a replica set with 4 slaves
[12:02:47] <Nodex> cool
[12:02:50] <Gr1> I could see that sometimes, some slave is giving slow queries, when compared to other slaves
[12:03:07] <Gr1> All are identical boxes.
[12:03:14] <Gr1> What are the points I should look for?
[12:03:21] <Gr1> Queries per second are the same on each one
[12:03:37] <Gr1> This is mostly for reads
[12:03:59] <Gr1> No writes are happening when this happened
[12:04:15] <Gr1> but faults were high when this slow query started
[12:04:57] <ildiroen> What have you tried so far?
[12:05:23] <Gr1> indexed my collections,
[12:05:34] <Gr1> but it is still happening after that
[12:05:39] <Gr1> and it is happening on random hosts
[12:06:31] <Gr1> I could see
[12:06:31] <Gr1> serverStatus was very slow:
[12:06:42] <Gr1> in the logs when this occured
[12:07:59] <Gr1> The amount of RAM available is sufficient to fit the index and data
[12:13:59] <Gr1> Any help/pointers on what should I be looking?
[13:24:27] <AlexejK> Gr1: Is it the same type of query that is slow?
[13:26:20] <michaelchum> Hi, I made a new server with mongod --syncdelay 0, I am inserting 30 million documents with pymongo but it is going exponentially slower: the insertion pauses (no CPU usage) every 10 minutes for 30 seconds, and the pauses grow longer and longer
[13:26:45] <michaelchum> My insertion script has been running 4 days now, it's just going slower and slower any ideas?
[13:27:25] <cheeser> 4 days seems a tad excessive. :)
[13:27:30] <rspijker> michaelchum: flushes to disk?
[13:27:51] <cheeser> is there an index on the collection (beside the _id index?)
[13:29:40] <michaelchum> Nope, no indexing inside my script!
[13:31:00] <michaelchum> rspijker: I was looking into flushing and fsync() but I don't really understand how it works and during the pauses, there is no CPU usage nor Hard disk writing
[13:31:22] <michaelchum> cheeser: yeah it's been a long long run : /
[13:32:07] <cheeser> have you considered using mongoimport?
[13:33:02] <rspijker> oh, wait, syncdelay 0 means that files aren’t synced to disk
[13:34:09] <michaelchum> cheeser: You mean using the terminal command mongoimport to import CSV/JSON files? I need to do some computations in Python before insertion
[13:34:09] <rspijker> so… what’s the memory footprint of this thing?
[13:34:18] <cheeser> michaelchum: ah
[13:35:26] <michaelchum> rspijker: How do I check the memory footprint? htop tells me 80.2GB of virtual memory and 74.4 MEM% not sure what this means
[13:35:48] <rspijker> well… if your memory footprint approaches RAM, mongo will have to do something
[13:35:59] <rspijker> since you’re telling it not to sync with disk, everything has to be in RAM
[13:36:09] <rspijker> when you get to the limits of what will fit, it will need to flush to disk
[13:36:25] <michaelchum> rspijker: But MongoDB is taking lots of space on my drive in dbpath
[13:36:56] <rspijker> define “lots of space”
[13:37:02] <rspijker> also, how much RAM does the machine have?
[13:37:14] <cheeser> "Do not set this value on production systems. In almost every situation, you should use the default setting."
[13:37:15] <michaelchum> 4GB of RAM
[13:37:30] <rspijker> yeah, so that’s going to run into issues...
[13:38:01] <rspijker> what your mongod is doing is storing as much as possible in RAM. When that's full it needs to sync to disk and will do so quite aggressively. This is most likely causing the delays
[13:38:16] <rspijker> although it is weird that you're not seeing any disk activity during that time.. how are you checking that exactly?
[13:38:39] <michaelchum> Using iostat
[13:40:02] <michaelchum> Thanks guys! I'll try taking out --syncdelay 0 although I had the same issue last week, and read somewhere on StackOverflow that --syncdelay 0 would fix it but it seems to be getting worse -.-'
[13:40:19] <michaelchum> But thank you so much rspijker and cheeser, I really appreciate :)
[13:40:22] <rspijker> there’s really no reason to use syncdelay unless there are some very specific circumstances
[13:40:33] <rspijker> good luck :)
[13:41:00] <michaelchum> Because the pauses were about 60seconds at the beginning so it made sense, but after, it's weird
[13:41:17] <michaelchum> Thanks!
[13:41:49] <rspijker> make sure you're not using settings like noprealloc and things like that
[13:41:54] <rspijker> they can cause unexpected pauses
[13:45:20] <michaelchum> I only have mongod --dbpath "mypath" is this okay?
[13:46:24] <rspijker> that should be fine, yeah
[13:46:47] <michaelchum> all right, great
[14:06:26] <kas84> hi guys
[14:07:10] <kas84> can I have a mongodb in several partitions?
[14:07:30] <tscanausa> kas84: several partitions?
[14:07:34] <rspijker> kas84: as in, split the data over several partitions, as in hard drives?
[14:07:40] <kas84> yeps
[14:07:51] <tscanausa> lvm is the only way I know
[14:07:59] <Nodex> you can mount the partitions onto directories as mount points
[14:08:02] <rspijker> in linux that’s fairly easy. You can just use directoryperdb and mount different drives to the dirs in question
[14:08:34] <rspijker> if you have only 1 DB, the only option is LVM, but then it’s not really partitions anymore at all
[14:08:59] <kas84> rspijker: that’s exactly what I was looking for, thanks!
[14:09:02] <Nodex> you can still mount a partition onto a directory INSIDE your /path/to/data directory
[14:09:34] <rspijker> kas84: cool :)
[14:09:51] <kas84> rspijker: what about performance doing such things?
[14:10:23] <rspijker> well, that all depends on your drives of course
[14:11:02] <kas84> but nothing extra, right?
[14:11:13] <rspijker> and the layout of the partitions. If they are all on the same drive then it might be a little crappier… But nothing horrible, I’d imagine
[14:11:29] <rspijker> there is no penalty to using directoryperdb afaik
[14:11:36] <kas84> I am concerned about adding more ebs to my amazon ec2 machine
[14:11:55] <kas84> the thing is, it’s just one db
[14:12:01] <kas84> but quite a lot of data
[14:12:07] <aboudreault> b
[14:12:38] <rspijker> well… if it’s only 1 DB, directoryperdb won’t really help you...
[14:19:43] <user55> Hello, I would like to ask about array-s in mongo documents.
[14:19:48] <user55> Working on a tagging system where a tag-item relation is managed with arrays embedded in documents.
[14:19:53] <user55> As an implementation, a tag may hold a list of items OR an item may hold a list of its tags.
[14:20:00] <user55> Each solution will result in very large arrays, with more than a million entries.
[14:20:05] <user55> Can mongoDB handle such large arrays OR are there any better solutions?
[14:20:09] <user55> Any help is very welcome!
[14:21:07] <rspijker> it can, as long as the document size (including embedded arrays) is not larger than 16MB
[14:22:35] <user55> so is this a proper solution?
[14:25:39] <ELFrederich> hey guys... curious what kind of API mongodb has underneith. Is it as simple to use as Redis where I can use /usr/bin/nc to communicate with it?
[14:26:39] <JT-EC> Should mongo 2.6.x install headers and lib files? I seem to be missing a huge number of files trying to build it the same way that worked for 2.4.
[14:27:59] <og01_> ELFrederich: perhaps your looking for the wire protocol? http://docs.mongodb.org/meta-driver/latest/legacy/mongodb-wire-protocol/
[14:28:47] <ELFrederich> og01_, yeah... we need to support archaic languages like TCL
[14:29:12] <og01_> ELFrederich: http://docs.mongodb.org/meta-driver/latest/tutorial/
[14:29:15] <ELFrederich> og01_, Redis has a TCL client...not sure if MongoDB has one, but if it is simple enough to implement it doesn't matter
[14:29:30] <rspijker> user55: well… not all that great, actually
[14:30:26] <user55> rspijker: can you suggest a better one?
[14:30:44] <rspijker> mongo objectIDs are 12 bytes. So 1.5 M of them will be more than will fit in a single document
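rspijker's arithmetic can be checked directly: 16MB of nothing but raw 12-byte ObjectIds tops out below 1.4 million entries, so 1.5M references can't fit in one document even before BSON per-element overhead:

```python
MAX_DOC_BYTES = 16 * 1024 * 1024   # document size limit
OBJECTID_BYTES = 12                # raw ObjectId size

max_ids = MAX_DOC_BYTES // OBJECTID_BYTES
assert max_ids == 1398101          # < 1.5 million, ignoring BSON overhead
```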
[14:31:15] <rspijker> will you really have more than a million tags? :/
[14:31:25] <user55> probably yes
[14:31:30] <rspijker> and will there be items that have more than a million tags?
[14:31:44] <user55> yes
[14:32:29] <user55> it's kind of a knowledge system database
[14:32:46] <user55> and it's going to be very complex
[14:34:13] <rspijker> well… you could do chunking. As in, have a tags collection where the actual unique tags live. Then have a tagChunks collection whose documents refer to the tags by _id, and each of them can hold a part of the item list. Then you need to ensure in your application layer that each chunk stays below the 16MB limit
[14:36:32] <user55> so a Tag, Item and Relation will be separated doduments?
[14:36:45] <user55> *documents
[14:43:42] <rspijker_> but instead of having a single embedded array on the Tag document (Item would work exactly the same way), the array is split over several documents, each referring to the tag document
[14:45:49] <user55> then this might be a more suitable solution for us on a large scale?
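The chunking scheme described above can be sketched as follows (the 15MB safety margin and the helper name are assumptions; a real implementation would also account for BSON per-element overhead when sizing chunks):

```python
def chunk_item_ids(item_ids, max_bytes=15 * 1024 * 1024, id_bytes=12):
    # Split one huge reference list into slices that each stay safely
    # under the 16MB document limit; each slice would become one
    # tagChunks document of the form {"tag_id": ..., "items": slice}.
    per_chunk = max_bytes // id_bytes
    return [item_ids[i:i + per_chunk]
            for i in range(0, len(item_ids), per_chunk)]

# toy sizes so the split is visible
chunks = chunk_item_ids(list(range(10)), max_bytes=36, id_bytes=12)
assert [len(c) for c in chunks] == [3, 3, 3, 1]
```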
[14:47:47] <Darkwater> I want to store quests in a collection; each quest has multiple objectives. Should I just put the objectives in an array in the quest document or is there something better I can do?
[14:50:36] <Nodex> yup
[14:51:58] <Darkwater> Nodex: like what?
[14:53:28] <Nodex> sotre them in an array :)
[14:53:31] <Nodex> store*
[14:53:56] <Nodex> perhaps you can dupe the data if you need access to it for other reasons?..
[14:54:35] <Darkwater> Nodex: you lost me
[14:54:45] <Darkwater> duplicate to access it for other reasons?
[14:54:48] <rspijker> it’s going to be fine to just put them in an embedded array Darkwater
[14:55:03] <Darkwater> okay thanks
[14:55:06] <Nodex> Darkwater : it's hard to know what your usecase is, so people saying it's fine really won't help you
[14:55:43] <Darkwater> Nodex: I just need to store and retrieve quests
[14:55:51] <rspijker> if you want to use the objectives outside of a quest scenario, it might be useful to have them stored separately from that quest. But realistically, will you ever do that? I think not
[14:55:59] <Darkwater> exactly
[14:56:18] <Nodex> you might not think that rspijker but not everyone thinks the same and everyone has different usecases
[14:56:42] <Nodex> Darkwater : then it's a self-explanatory, obvious answer
[14:57:16] <Nodex> if you think quests will take up more than 16mb then you will have to re-think your strategy
[14:57:43] <Darkwater> doubt they will
[14:57:50] <Darkwater> why?
[14:58:01] <rspijker> true Nodex, but I think for 99% of all people this will work. The 1% can come back if they actually run into a problem. I prefer that over having to explain every little detail or ask about every little detail before giving an answer. It’s a choice :)
[14:58:33] <Nodex> I find it better to educate people on the reasons, then perhaps next time they can answer themselves or help others :)
[14:58:50] <Nodex> Darkwater : because documents have a 16mb limit (currently)
[14:59:03] <Darkwater> ah
[15:01:31] <rspijker> certainly something to consider. And I do tend to be a bit more ‘explaining’ on questions that I find interesting (no offense Darkwater, just because I don’t find it interesting doesn’t mean it’s not a valid question)
[15:02:41] <Darkwater> none taken
[15:28:16] <cozby> Hi, I've setup a replica set - works fine, but I would like to make some config changes in mongod.conf and have them take effect
[15:28:24] <cozby> how do you restart mongod?
[15:28:43] <cozby> service mongod restart; when I tried that it locked up on shutting down
[15:29:02] <Joeskyyy> sudo?
[15:29:02] <cozby> and then I had to delete the lock and restart it
[15:29:18] <cozby> can I kill -HUP <mongoPID> ?
[15:29:24] <cozby> will that reload the config?
[15:29:43] <Joeskyyy> a service restart should work, but you'd need sudo to kill the lock file.
[15:30:14] <cozby> Joeskyyy: hmm right, I am sudo'ing
[15:30:25] <cozby> but it locks on shutdown and then I have to delete the lock file
[15:30:28] <cozby> that can't be a normal thing
[15:30:43] <cozby> is that general practice?
[15:30:48] <cozby> sudo service mongod restart
[15:30:56] <Joeskyyy> That's what I do. haha
[15:31:11] <cozby> I see...
[15:31:32] <cozby> well I appreciate your input Joeskyyy
[15:39:21] <inad922> hello
[15:39:56] <inad922> is there a way to make a query in mongodb where I have say 5000 entries and I want 100 entries from 4100 to 4200?
[15:40:23] <Derick> inad922: yes, limit() and skip()
[15:40:36] <inad922> Derick, thanks
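Derick's limit()/skip() answer for entries 4100 to 4200, sketched as the arguments a driver call would take; the list slice shows the equivalent arithmetic on plain integers:

```python
def page_args(start, end):
    # entries [start, end) -> arguments for find().skip(...).limit(...)
    return {"skip": start, "limit": end - start}

args = page_args(4100, 4200)
assert args == {"skip": 4100, "limit": 100}

# the window a driver would return, illustrated on a plain list
entries = list(range(5000))
page = entries[args["skip"]:args["skip"] + args["limit"]]
assert page[0] == 4100 and page[-1] == 4199 and len(page) == 100
```

Note that skip() scans past the skipped documents server-side, so very deep pages get slow; range queries on an indexed field scale better.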
[16:13:32] <shesek> Is there any cost incurred of storing large subdocuments, versus storing them as completely separate documents on another collection?
[16:13:41] <shesek> s/of/from
[16:15:18] <saml> Tue Jun 10 11:17:54 [conn1891318] command articles.$cmd command: { getlasterror: 1, j: true } ntoreturn:1 keyUpdates:0 reslen:101 1206ms
[16:15:20] <saml> hey what is that?
[16:15:23] <saml> is that problem?
[16:15:29] <saml> whenever that happens in the log, things go crazy
[16:37:22] <Nodex> shesek : the lookup?
[16:39:48] <shesek> Nodex, if I load the parent document without the subdocument (by specifying a list of fields), will there be any performance penalty from the existence of a large subdocument?
[16:53:20] <shoshy> question , the involves mongodb... i hope someone can help. I'm using the bitnami AMI with mongo on, tried to do "mongo admin -u admin -p bitnami" (which is the default according to http://wiki.bitnami.com/Components/mongoDB#How_to_change_the_MongoDB_root_password.3f) and i get auth failed. I also tried resetting the password by modifying mongodb.conf and restarting the server and following instructions. Still can't login.
[16:53:39] <shoshy> It's a cry out for help basically as this isn't bitnami.. .but maybe someone has been in my spot before.
[17:00:01] <inad922> How can I indicate the total number of entries in datatables if I get pages via ajax calls?
[17:01:30] <inad922> Wrong channel sorry
[17:01:47] <inad922> How can I get the fields on which I have an index in mongoalchemy?
[17:06:50] <Lawhater> Hello, if I have a DocumentField.. but I want multiple DOcuments.. how do I type that in the terminal? http://pastebin.com/dG8rxiL3
[17:07:10] <Lawhater> I have options = db.DocumentField(Option) but there will be multiple options.. :S
[17:19:02] <shesek> When an upsert results in an update to an existing document, is there any way to get the ID of that document?
[17:34:10] <daslicht> is there a way to generate objectids (_id) in the browser with js ?
[17:38:06] <daslicht> ok i obey, i poll the server
[18:00:27] <insanidade> quick question: I have a simple flask-mongoengine app with two models (A and B, where A has many B's and B has a reference to A through ReferenceField). I created a sample .js file so that I could insert some testing data for both collections in mongodb. The problem is that it looks like the relationship between both models (collections A and B in mongodb) does not exist. Is that the expected behavior?
[18:26:12] <linojon> hi, using mongo_mapper, if i subsclass a model, is there a way to store the subclass items in separate collection from the parent ones?
[18:26:36] <styles> How can I determine Mongos performance? If it's pulling from disk a lot etc.. ?
[18:28:52] <joshua> Mongos doesn't use disk.
[18:29:10] <joshua> Or did you mean Mongo's heh
[18:29:20] <tscanausa> they do use disk since they log
[18:29:56] <styles> db.runCommand( { serverStatus: 1, workingSet: 1, metrics: 0, locks: 0 } )
[18:29:56] <styles> got it
[18:31:07] <joshua> Forgot about the logs, but I don't think that would have a lot of impact on query performance.
[18:31:31] <joshua> Best way to track everything is start using MMS
[18:38:25] <linojon> wrt my question, i tried set_collection_name on the subclass but gets an unexpected error
[18:38:48] <saml> pymongo.MongoClient(uri, w=1, j=True) I have this. and writes ton of data every 10 minutes. during writes, some queries return empty. why?
[18:48:41] <Nodex> perhaps you have a disk error and the jounral is erroring or timing out?
[18:52:43] <arghav> So, I got this error "Btree::insert: key too large to index" and the entire document disappeared. Is that weird?
[18:53:06] <arghav> I upgraded mongo from 2.4 to 2.6
[18:58:02] <kali> arghav: http://docs.mongodb.org/manual/release-notes/2.6-compatibility/
[18:59:18] <arghav> Damn it. I should have read that.
[18:59:27] <kali> yes, you shoud :)
[18:59:34] <kali> +l
[19:00:04] <arghav> But I wasn't expecting mongo would just give up the data just because indexing failed. :(
[19:00:35] <saml> Nodex, no.. it's very consistent
[19:01:04] <saml> another thing i notice is whenever that script runs (every 10 minutes), so many connections are open from other members of replicaset
[19:09:29] <Darkwater> I want to change a string in an object in an array in a document, what's the best way to do that?
[19:10:56] <Nodex> on multiple docs?
[19:11:01] <Darkwater> on a single doc
[19:11:33] <cheeser> findAndModify with a $pull and a $push ?
[19:13:50] <Darkwater> eh never mind, I'm gonna restructure my collection
[19:15:14] <Darkwater> eh, isn't there a way to have like subcollections?
[19:15:27] <Darkwater> like queryable arrays
[19:15:35] <Darkwater> of documents
[19:22:03] <cheeser> you can query arrays on documents
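For Darkwater's original question (change one string inside an object in an array), note that $pull and $push on the same array field can't go in one update document; the positional $ operator is the usual single-statement route. A driver-free simulation of that update (the quest field names are invented):

```python
# Hypothetical quest document; 'objectives' and 'text' are invented names.
doc = {"_id": 1, "objectives": [{"text": "old"}, {"text": "keep"}]}
filt = {"_id": 1, "objectives.text": "old"}
update = {"$set": {"objectives.$.text": "new"}}

def apply_positional_set(doc, filt, update):
    # Minimal in-memory simulation of the positional $ update: find the
    # first array element matched by the filter, set the leaf field on it.
    (path, value), = update["$set"].items()
    arr_field, _, leaf = path.partition(".$.")
    match_key = next(k for k in filt if k.startswith(arr_field + "."))
    leaf_key = match_key.split(".", 1)[1]
    for elem in doc[arr_field]:
        if elem.get(leaf_key) == filt[match_key]:
            elem[leaf] = value
            break
    return doc

apply_positional_set(doc, filt, update)
assert doc["objectives"] == [{"text": "new"}, {"text": "keep"}]
# with a driver: db.quests.update(filt, update)
```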
[20:24:49] <cozby> is there a way to reload the config file without restarting?
[20:24:51] <cozby> like kill -HUP?
[20:24:59] <Derick> I don't think so
[20:25:16] <cozby> Derick: so you have to stop start?
[20:25:27] <Derick> I think so
[20:25:30] <cozby> I have a replica set config, when I do that, I take it my primary will change?
[20:25:42] <Derick> that's possible if you're doing that for the primary node
[20:26:07] <Derick> if this is a production environment, you should step the primary down first before you modify the config, but only after you've udpated for all the secondaries first
[20:28:28] <cozby> Derick: ah, good call
[20:28:55] <cozby> this isn't prod, its stage, but yes thats definitely something i'll do for prod
[20:30:06] <cozby> ugh, the strange thing is whenever I do sudo service mongod restart, it just hangs at stopping mongod:
[20:30:29] <Derick> i've not had much success with it either
[20:30:39] <Derick> a good ctrl-C does the trick though
[20:30:39] <cozby> Derick: so whats your process?
[20:30:42] <cozby> yep
[20:30:49] <cozby> then I usually have to delete the lock
[20:30:51] <cozby> and restart
[20:30:53] <cozby> er start
[20:31:26] <cozby> good to know I'm not in the dark, but also sad...
[20:32:26] <Derick> i don't tend to start it with service either
[20:55:24] <daidoji> does Mongo have plans for compressed collections in the future?
[20:57:05] <saml> hey, I have {id1: {count: count1}, id2: {count: count2}, ..} how can I update multiple documents (id1, id2, ..) ?
[20:57:10] <saml> in one .update() call
[20:57:14] <saml> or something simialr
[20:57:20] <saml> right now, i'm doing in a loop
[20:57:47] <saml> for (var k in stuff) { collection.update({_id: k}, stuff[k]); }
[20:59:25] <saml> update parameter of update function must be a document? can it be a function?
[21:03:19] <cmendes0101|> can't you do an update with a where that covers that?
[21:03:51] <cmendes0101|> or is that a doc with id1,id2 being subdocs?
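saml's per-key loop can be batched: 2.6 added a Bulk API in the shell (initializeUnorderedBulkOp), and later pymongo versions expose bulk_write with a list of UpdateOne operations. A driver-free sketch of the same per-_id update against an in-memory store (names mirror saml's snippet):

```python
stuff = {"id1": {"count": 3}, "id2": {"count": 7}}

def bulk_update(store, updates):
    # One pass instead of N round trips: upsert each _id with its fields,
    # the same shape a bulk_write([UpdateOne(...), ...]) batch would carry.
    for _id, fields in updates.items():
        store.setdefault(_id, {"_id": _id}).update(fields)
    return store

store = {"id1": {"_id": "id1", "count": 1}}
bulk_update(store, stuff)
assert store["id1"]["count"] == 3                  # existing doc updated
assert store["id2"] == {"_id": "id2", "count": 7}  # missing doc upserted
```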
[22:36:30] <ac360> Can anyone recommend the best way to save long text (i.e., product descriptions) in a mongoDB database? I'm using the MEAN stack with Mongoose, and I would like to preserve line breaks if possible...